Deformable image registration is a key task in medical image analysis. The Brain Tumor Sequence Registration challenge (BraTS-Reg) aims at establishing correspondences between pre-operative and follow-up scans of the same patient diagnosed with an adult brain diffuse high-grade glioma, and addresses the challenging task of registering longitudinal data with major changes in tissue appearance. In this work, we propose a two-stage cascaded network based on the Inception and TransMorph models. The dataset for each patient comprised a native pre-contrast (T1), a contrast-enhanced T1-weighted (T1-CE), a T2-weighted (T2), and a Fluid Attenuated Inversion Recovery (FLAIR) scan. The Inception model was used to fuse the four image modalities and extract the most relevant information. A variant of the TransMorph architecture was then adapted to generate the displacement fields. The loss function was composed of a standard image similarity measure, a diffusion regularizer, and an edge-map similarity measure added to overcome intensity dependence and reinforce correct boundary deformation. We observed that adding the Inception module substantially increased the performance of the network. Additionally, performing an initial affine registration before training the model improved the landmark error measurements between pre- and post-operative MRIs. Our best model, combining the Inception and TransMorph architectures and trained on the affinely pre-registered dataset, achieved a median absolute error of 2.91 (initial error = 7.8). We achieved 6th place at the time of model submission in the final testing phase of the BraTS-Reg challenge.
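A minimal sketch of how such a composite registration loss could be assembled in PyTorch, assuming the network already produces a displacement field `flow`, the warped moving image `warped`, and the fixed target image `fixed`; the mean-squared-error similarity, finite-difference edge maps, and the weights `lambda_diff` and `lambda_edge` are illustrative assumptions, not the exact components used in the paper.

```python
import torch
import torch.nn.functional as F

def diffusion_regularizer(flow):
    """Penalize spatial gradients of the displacement field (flow: B x 3 x D x H x W)."""
    dz = flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]
    dy = flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()

def edge_map(img):
    """Crude gradient-magnitude edge map via finite differences (img: B x 1 x D x H x W)."""
    gz = img[:, :, 1:, :-1, :-1] - img[:, :, :-1, :-1, :-1]
    gy = img[:, :, :-1, 1:, :-1] - img[:, :, :-1, :-1, :-1]
    gx = img[:, :, :-1, :-1, 1:] - img[:, :, :-1, :-1, :-1]
    return torch.sqrt(gz ** 2 + gy ** 2 + gx ** 2 + 1e-8)

def registration_loss(warped, fixed, flow, lambda_diff=1.0, lambda_edge=1.0):
    sim = F.mse_loss(warped, fixed)                       # image similarity term
    reg = diffusion_regularizer(flow)                     # smoothness of the deformation
    edge = F.mse_loss(edge_map(warped), edge_map(fixed))  # boundary agreement
    return sim + lambda_diff * reg + lambda_edge * edge
```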
Event Extraction (EE) is one of the fundamental tasks in Information Extraction (IE), aiming to recognize event mentions and their arguments (i.e., participants) in text. Due to its importance, extensive methods and resources have been developed for EE. However, one limitation of current EE research is the under-exploration of non-English languages, for which the lack of high-quality multilingual EE datasets for model training and evaluation has been the main hindrance. To address this limitation, we propose a novel Multilingual Event Extraction dataset (MEE) that provides annotations for more than 50K event mentions in 8 typologically different languages. MEE comprehensively annotates data for entity mentions, event triggers, and event arguments. We conduct extensive experiments on the proposed dataset to reveal challenges and opportunities for multilingual EE.
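For illustration, an event-extraction annotation of the kind MEE provides can be pictured as a record linking a trigger to typed arguments; the record below is a hypothetical simplification for exposition, not the actual MEE schema or an example from the dataset.

```python
# Hypothetical event annotation (not the actual MEE file format).
example_event = {
    "sentence": "The company acquired the startup for $2 billion in March.",
    "trigger": {"text": "acquired", "event_type": "Transaction.Acquisition"},
    "arguments": [
        {"text": "The company", "role": "Acquirer", "entity_type": "ORG"},
        {"text": "the startup", "role": "Acquired", "entity_type": "ORG"},
        {"text": "March", "role": "Time", "entity_type": "TIME"},
    ],
}
```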
Graph Neural Networks (GNNs) are deep learning models designed specifically for graph data, and they typically rely on node features as the input to the first layer. When applying such networks to graphs without node features, one can extract simple graph-based node features (e.g., node degree) or learn the input node representations (i.e., embeddings) while training the network. The latter approach, training the node embeddings, is more likely to yield better performance, but the number of embedding parameters grows linearly with the number of nodes. It is therefore impractical to train the input node embeddings together with the GNN end-to-end within GPU memory when dealing with industrial-scale graph data. Inspired by embedding compression methods developed for natural language processing (NLP) tasks, we develop a node embedding compression method in which each node is represented with a bit vector instead of a floating-point vector. The parameters used in the compression method can be trained jointly with the GNN. We show that the proposed node embedding compression method outperforms the alternatives.
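One plausible realization of the idea is sketched below, assuming binarization via a sign function with a straight-through gradient estimator and a small learned projection that maps the bit vectors to dense inputs, trained jointly with the GNN; the actual compression module in the paper may differ.

```python
import torch
import torch.nn as nn

class BinaryNodeEmbedding(nn.Module):
    """Each node keeps a trainable logit vector that is binarized to a bit vector,
    then projected to a dense embedding fed into the GNN's first layer."""

    def __init__(self, num_nodes, num_bits, embed_dim):
        super().__init__()
        self.logits = nn.Parameter(torch.randn(num_nodes, num_bits) * 0.1)
        self.project = nn.Linear(num_bits, embed_dim)

    def forward(self, node_ids):
        logits = self.logits[node_ids]
        bits = (logits > 0).float()             # {0, 1} bit vector per node
        bits = bits + logits - logits.detach()  # straight-through estimator for gradients
        return self.project(bits)

# Usage sketch:
# emb = BinaryNodeEmbedding(num_nodes=10_000, num_bits=64, embed_dim=128)
# features = emb(torch.arange(10_000))  # dense node features for the GNN
```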
We propose a greedy algorithm to select $n$ important features among $p$ input features for a nonlinear prediction problem. The features are selected sequentially in an iterative loss-minimization procedure. We use a neural network as the predictor in the algorithm to compute the loss, and therefore refer to the method as Neural Greedy Pursuit (NGP). NGP selects the $n$ features efficiently when $n \ll p$, and the sequential selection procedure provides a notion of feature importance in decreasing order. We show experimentally that NGP performs better than several feature selection methods such as DeepLIFT and drop-one-out loss. In addition, we experimentally observe a phase-transition behavior: all $n$ features are perfectly selected without false positives once the training data size exceeds a threshold.
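A rough sketch of the greedy selection loop, assuming a small scikit-learn MLP as the neural predictor and validation mean-squared error as the loss; the predictor architecture, loss, and stopping rule here are placeholders rather than the exact NGP procedure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def greedy_feature_selection(X_train, y_train, X_val, y_val, n_features):
    """Sequentially add the feature whose inclusion most reduces validation loss."""
    selected = []
    remaining = list(range(X_train.shape[1]))
    for _ in range(n_features):
        best_loss, best_feat = np.inf, None
        for f in remaining:
            cols = selected + [f]
            model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
            model.fit(X_train[:, cols], y_train)
            loss = mean_squared_error(y_val, model.predict(X_val[:, cols]))
            if loss < best_loss:
                best_loss, best_feat = loss, f
        selected.append(best_feat)
        remaining.remove(best_feat)
    return selected  # features in decreasing order of importance
```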
The COVID-19 disease spread rapidly, and nearly three months after the first positive case was confirmed in China, the coronavirus began to spread across the United States. Some states and counties reported large numbers of positive cases and deaths, while others reported relatively few COVID-19-related cases and deaths. This paper analyzes, at the county level, the factors that may affect the risk of COVID-19 infection and mortality. An innovative approach using K-Means clustering and multiple classification models is used to determine the most critical factors. The results show that mean temperature, the percentage of people below the poverty line, the percentage of obese adults, air pressure, population density, wind speed, longitude, and the percentage of uninsured people are the most significant attributes.
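A simplified sketch of such an analysis pipeline in scikit-learn, assuming county-level risk groups are derived from K-Means clusters of case and death rates and a random-forest classifier then ranks the candidate attributes; the column names, cluster count, and models below are illustrative assumptions, not the exact setup used in the paper.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# counties: DataFrame with per-county attributes plus outcome rates (hypothetical columns).
counties = pd.read_csv("county_data.csv")
attributes = ["mean_temperature", "poverty_rate", "obesity_rate", "air_pressure",
              "population_density", "wind_speed", "longitude", "uninsured_rate"]

# Cluster counties into risk groups based on outcomes.
risk_labels = KMeans(n_clusters=3, random_state=0).fit_predict(
    counties[["cases_per_capita", "deaths_per_capita"]])

# Predict the risk group from county attributes and rank attribute importance.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(counties[attributes], risk_labels)
importance = sorted(zip(attributes, clf.feature_importances_),
                    key=lambda t: t[1], reverse=True)
print(importance)
```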
We propose an efficient method to generate white-box adversarial examples that trick a character-level neural classifier. We find that only a few manipulations are needed to greatly decrease the accuracy. Our method relies on an atomic flip operation, which swaps one token for another based on the gradients of the one-hot input vectors. Because the method is efficient, we can perform adversarial training, which makes the model more robust to attacks at test time. With a few semantics-preserving constraints, we demonstrate that HotFlip can also be adapted to attack a word-level classifier.
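A bare-bones sketch of the flip-scoring step, assuming a character-level model that accepts one-hot inputs and using the first-order approximation grad[i, b] - grad[i, a] to estimate the loss increase from replacing character a with b at position i; the surrounding adversarial training and constraint machinery of HotFlip is omitted, and the function and argument names are illustrative.

```python
import torch

def best_flip(model, one_hot, target, loss_fn):
    """one_hot: (seq_len, vocab_size) one-hot character encoding of a single example."""
    one_hot = one_hot.clone().detach().requires_grad_(True)
    loss = loss_fn(model(one_hot.unsqueeze(0)), target)
    loss.backward()
    grad = one_hot.grad                                             # (seq_len, vocab_size)
    current = grad.gather(1, one_hot.argmax(dim=1, keepdim=True))   # gradient at current chars
    gain = grad - current                                           # first-order loss increase per flip
    gain[one_hot.bool()] = float("-inf")                            # exclude "flipping" to the same char
    pos = gain.max(dim=1).values.argmax()                           # position with the best flip
    new_char = gain[pos].argmax()                                   # best replacement character
    return pos.item(), new_char.item()
```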